A limited memory adaptive trust-region approach for large-scale unconstrained optimization
Authors
Abstract:
This study concerns a trust-region-based method for solving unconstrained optimization problems. The approach takes advantage of the compact limited-memory BFGS updating formula together with an appropriate adaptive radius strategy. In our approach, the adaptive technique reduces the number of subproblems that must be solved, while exploiting the structure of limited-memory quasi-Newton formulas makes large-scale problems tractable. Theoretical analysis shows that the new approach preserves global convergence to a first-order stationary point under classical assumptions. Moreover, superlinear and quadratic convergence rates are also established under suitable conditions. Preliminary numerical experiments demonstrate the effectiveness of the proposed approach on large-scale unconstrained optimization problems.
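The paper's algorithmic details are behind the full text, so as a rough illustration only: the sketch below shows a generic trust-region loop with the classical ratio-based radius update (Cauchy-point step, identity Hessian model, and all constants are textbook illustrative choices, not the authors' adaptive rule or their compact L-BFGS model).

```python
import numpy as np

def cauchy_point(g, B, delta):
    # Minimizer of the quadratic model m(p) = g'p + 0.5 p'Bp along -g,
    # restricted to the trust region ||p|| <= delta (Nocedal-Wright style).
    gBg = g @ (B @ g)
    gnorm = np.linalg.norm(g)
    tau = 1.0 if gBg <= 0 else min(1.0, gnorm**3 / (delta * gBg))
    return -tau * (delta / gnorm) * g

def trust_region(f, grad, x0, delta0=1.0, tol=1e-6, max_iter=200):
    x = x0.copy()
    delta = delta0
    B = np.eye(len(x0))  # crude model Hessian; a real method would update this
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = cauchy_point(g, B, delta)
        pred = -(g @ p + 0.5 * p @ (B @ p))   # predicted reduction
        ared = f(x) - f(x + p)                # actual reduction
        rho = ared / pred if pred > 0 else -1.0
        # classical radius update: shrink on poor agreement, expand on
        # good agreement when the step hit the trust-region boundary
        if rho < 0.25:
            delta *= 0.5
        elif rho > 0.75 and abs(np.linalg.norm(p) - delta) < 1e-12:
            delta = min(2.0 * delta, 100.0)
        if rho > 0.1:
            x = x + p
    return x
```

An adaptive radius strategy, as in the abstract, would replace the fixed shrink/expand factors with a radius chosen from local curvature information, so that fewer subproblems are re-solved after rejected steps.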
Similar resources
Limited-Memory Reduced-Hessian Methods for Large-Scale Unconstrained Optimization
Limited-memory BFGS quasi-Newton methods approximate the Hessian matrix of second derivatives by the sum of a diagonal matrix and a fixed number of rank-one matrices. These methods are particularly effective for large problems in which the approximate Hessian cannot be stored explicitly. It can be shown that the conventional BFGS method accumulates approximate curvature in a sequence of expandi...
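The truncated paragraph above describes the limited-memory structure: a scaled-identity (or diagonal) seed plus a small number of low-rank corrections, so the approximate Hessian is never stored. As a textbook aside (not code from the reviewed papers), the standard L-BFGS two-loop recursion applies the resulting inverse-Hessian approximation to a vector using only the stored pairs:

```python
import numpy as np

def lbfgs_two_loop(g, s_list, y_list):
    """Apply the L-BFGS inverse-Hessian approximation H_k to g.

    s_list / y_list hold the m most recent step and gradient-difference
    pairs: s_i = x_{i+1} - x_i, y_i = grad f(x_{i+1}) - grad f(x_i).
    """
    q = g.astype(float).copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    # first loop: newest pair to oldest
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    # scaled-identity seed H0 = gamma * I (a common default choice)
    if s_list:
        s, y = s_list[-1], y_list[-1]
        gamma = (s @ y) / (y @ y)
    else:
        gamma = 1.0
    r = gamma * q
    # second loop: oldest pair to newest
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * (y @ r)
        r += (a - beta) * s
    return r  # approximately H_k @ g
```

Only O(mn) storage and work is needed per application, which is what makes these methods attractive when the n-by-n Hessian cannot be held explicitly.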
An Adaptive Nonmonotone Trust Region Method for Unconstrained Optimization Problems Based on a Simple Subproblem
Using a simple quadratic model in the trust region subproblem, a new adaptive nonmonotone trust region method is proposed for solving unconstrained optimization problems. In our method, based on a slight modification of the proposed approach in (J. Optim. Theory Appl. 158(2):626-635, 2013), a new scalar approximation of the Hessian at the current point is provided. Our new proposed method is eq...
A Trust-region Method using Extended Nonmonotone Technique for Unconstrained Optimization
In this paper, we present a nonmonotone trust-region algorithm for unconstrained optimization. We first introduce a variant of the nonmonotone strategy proposed by Ahookhosh and Amini [AhA 01] and incorporate it into the trust-region framework to construct a more efficient approach. Our new nonmonotone strategy combines the current function value with the maximum function values in some pri...
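The convex-combination idea this snippet gestures at can be sketched generically (the weight `eta` and memory length are illustrative parameters, not the exact formula from the cited paper):

```python
def nonmonotone_reference(f_values, eta=0.8, memory=5):
    # Blend the worst (largest) recent objective value with the current one:
    #   R_k = eta * max(recent) + (1 - eta) * f_k.
    # eta = 1 recovers the classical max-based nonmonotone rule;
    # eta = 0 recovers the ordinary monotone ratio test.
    recent = list(f_values)[-memory:]
    return eta * max(recent) + (1.0 - eta) * recent[-1]
```

In a nonmonotone trust-region test, the acceptance ratio then compares `R_k - f(x + p)` against the model's predicted reduction, so occasional increases in the objective are tolerated.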
Large Scale Unconstrained Optimization
This paper reviews advances in Newton, quasi-Newton, and conjugate gradient methods for large-scale optimization. It also describes several packages developed during the last ten years and illustrates their performance on some practical problems. Much attention is given to the concept of partial separability, which is gaining importance with the arrival of automatic differentiation tools and of opt...
A Retrospective Trust-region Method for Unconstrained Optimization
We introduce a new trust-region method for unconstrained optimization where the radius update is computed using the model information at the current iterate rather than at the preceding one. The update is then performed according to how well the current model retrospectively predicts the value of the objective function at the last iterate. Global convergence to first- and second-order critical points i...
Journal title
Volume 42, Issue 4
Pages 819-837
Publication date: 2016-08-01